57 research outputs found

    FRESH – FRI-based single-image super-resolution algorithm

    In this paper, we consider the problem of single-image super-resolution and propose a novel algorithm that outperforms state-of-the-art methods without the need to learn patch pairs from external data sets. We achieve this by modeling images, and more precisely lines of images, as piecewise smooth functions, and propose a resolution enhancement method for this class of functions. The method makes use of the theory of sampling signals with finite rate of innovation (FRI) and combines it with traditional linear reconstruction methods. We combine the two reconstructions by leveraging the multi-resolution analysis of wavelet theory, and show how an FRI reconstruction and a linear reconstruction can be fused using filter banks. We then apply this method along the vertical, horizontal, and diagonal directions of an image to obtain a single-image super-resolution algorithm. We also propose a further improvement of the method based on learning from the errors of our super-resolution result at lower resolution levels. Simulation results show that our method outperforms state-of-the-art algorithms under different blurring kernels.
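    The filter-bank fusion step can be illustrated with a one-level Haar bank, the simplest wavelet case. This is a minimal sketch, not the FRESH algorithm itself: in the paper the detail band would come from the FRI reconstruction, whereas zero details reduce the synthesis to plain linear upsampling. All names here are illustrative.

```python
import numpy as np

def haar_analysis(x):
    """One-level Haar DWT: split x (even length) into approximation and detail."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_synthesis(a, d):
    """One-level Haar synthesis filter bank: merge approximation and detail
    bands into a signal of twice the length (the upsampling step)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# Toy fusion: treat a low-resolution image line as the approximation band
# (the linear part) and plug in detail coefficients estimated by some other
# method (FRI in the paper); zero details give plain linear upsampling.
lr_line = np.array([1.0, 4.0, 2.0, 0.5])
upsampled = haar_synthesis(lr_line, np.zeros_like(lr_line))
```

    With the Haar pair, zero detail coefficients reduce the synthesis to sample duplication; the paper's contribution is precisely a better, FRI-based estimate of those missing details.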

    Solving physics-driven inverse problems via structured least squares

    Numerous physical phenomena are well modeled by partial differential equations (PDEs); they describe a wide range of phenomena across many application domains, from modeling EEG signals in electroencephalography to modeling the release and propagation of toxic substances in environmental monitoring. In these applications it is often of interest to find the sources of the resulting phenomena, given some sparse sensor measurements of them. This is the main task of this work. Specifically, we show that finding the sources of such PDE-driven fields can be turned into solving a class of well-known multi-dimensional structured least squares problems. This link is achieved by leveraging recent results in modern sampling theory, in particular the approximate Strang-Fix theory. Subsequently, numerical simulation results are provided in order to demonstrate the validity and robustness of the proposed framework.
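    The reduction to least squares can be illustrated with a toy forward model. Here a generic exponential kernel stands in for a PDE Green's function (the paper's Strang-Fix machinery is not reproduced); all names and parameters are illustrative.

```python
import numpy as np

# Toy setup: a field induced by point sources on a grid is observed at a
# handful of sensors. G[i, j] is the (assumed known) response at sensor i
# to a unit source at grid point j; source recovery then becomes a
# least-squares problem in the source amplitudes.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 8)            # candidate source locations
sensors = np.linspace(0.05, 0.95, 15)      # sparse sensor positions
G = np.exp(-8.0 * np.abs(sensors[:, None] - grid[None, :]))

f_true = np.zeros(8)
f_true[[2, 6]] = [2.0, -1.0]               # two point sources
y = G @ f_true + 1e-8 * rng.standard_normal(15)   # noisy sensor readings

# Least-squares source estimate (structured/regularized variants add priors)
f_hat, *_ = np.linalg.lstsq(G, y, rcond=None)
```

    The structured variants in the paper exploit the special (e.g. multi-level Toeplitz-like) structure of such systems rather than solving them densely.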

    Sampling streams of pulses with unknown shapes

    This paper extends the class of continuous-time signals that can be perfectly reconstructed by developing a theory for the sampling and exact reconstruction of streams of short pulses with unknown shapes. The single pulse is modelled as the delayed version of a wavelet-sparse signal, which is normally not band-limited. As the delay can be an arbitrary real number, it is hard to develop an exact sampling result for this type of signal. We achieve the exact reconstruction of the pulses by using only the knowledge of the Fourier transform of the signal at specific frequencies. We further introduce a multi-channel acquisition system which uses a new family of compact-support sampling kernels for extracting the Fourier information from the samples. The shape of the kernel is independent of the wavelet basis in which the pulse is sparse, and hence the same acquisition system can be used with pulses that are sparse in different wavelet bases. By exploiting the fact that the pulses have short duration and that the sampling kernels have compact support, we finally propose a local and sequential algorithm to reconstruct streaming pulses from the samples.
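    Why Fourier information suffices for an arbitrary real delay can be sketched simply: a delay appears as a linear phase, so it can be read off from the cross-spectrum without knowing the pulse shape. A minimal, idealized sketch (circular shifts, noiseless, single channel; not the paper's multi-channel kernels):

```python
import numpy as np

# A pulse of unknown shape and its delayed copy: the delay shows up as a
# linear phase in the Fourier domain.
N = 64
t0 = 3.5                                   # true (sub-sample, real-valued) delay
rng = np.random.default_rng(1)
pulse = np.convolve(rng.standard_normal(8), np.hanning(8))  # arbitrary short pulse
x = np.zeros(N)
x[:len(pulse)] = pulse

X = np.fft.fft(x)
k = np.fft.fftfreq(N) * N                  # integer frequency bins
Y = X * np.exp(-2j * np.pi * k * t0 / N)   # ideal (circular) fractional delay
y = np.fft.ifft(Y).real

# Estimate the delay from the phase of a low-frequency cross-spectrum bin
# (low enough that the phase does not wrap for this t0)
R = np.fft.fft(y) * np.conj(np.fft.fft(x))
t0_hat = -np.angle(R[1]) * N / (2 * np.pi)
```

    The paper instead recovers such Fourier information at specific frequencies directly from samples taken with compact-support kernels.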

    An image recapture detection algorithm based on learning dictionaries of edge profiles

    With today's digital camera technology, high-quality images can be recaptured from a liquid crystal display (LCD) monitor screen with relative ease. An attacker may choose to recapture a forged image in order to conceal imperfections and to increase its authenticity. In this paper, we address the problem of detecting images recaptured from LCD monitors. We provide a comprehensive overview of the traces found in recaptured images, and we argue that aliasing and blurriness are the least scene-dependent features. We then show how aliasing can be eliminated by setting the capture parameters to predetermined values. Driven by this finding, we propose a recapture detection algorithm based on learned edge blurriness. Two sets of dictionaries are trained using the K-singular value decomposition approach from the line spread profiles of selected edges from single captured and recaptured images. A support vector machine classifier is then built using dictionary approximation errors and the mean edge spread width from the training images. The algorithm, which requires no user intervention, was tested on a database that included more than 2500 high-quality recaptured images. Our results show that our method achieves a performance rate that exceeds 99% for recaptured images and 94% for single captured images.
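    The edge-blurriness feature can be illustrated with a toy edge-spread measurement. This is a hypothetical simplification (a 10%-90% rise width on a synthetic edge), not the paper's dictionary-based pipeline:

```python
import numpy as np

# Recaptured images tend to show blurrier edges, so the width of the edge
# profile is a useful feature. Here a step edge is blurred with a Gaussian
# and its 10%-90% rise distance is measured in samples.
def edge_spread_width(profile, lo=0.1, hi=0.9):
    """10%-90% rise width of a normalized, monotonically rising edge profile."""
    p = (profile - profile.min()) / (profile.max() - profile.min())
    i_lo = int(np.argmax(p >= lo))         # first sample above the low level
    i_hi = int(np.argmax(p >= hi))         # first sample above the high level
    return i_hi - i_lo

x = np.arange(-30, 31)
sharp = (x >= 0).astype(float)             # ideal step edge
g = np.exp(-x**2 / (2 * 3.0**2))
g /= g.sum()                               # Gaussian blur kernel, sigma = 3
blurred = np.convolve(sharp, g, mode="same")

w_sharp = edge_spread_width(sharp)
w_blur = edge_spread_width(blurred)
```

    In the paper, the line spread profiles (derivatives of such edge profiles) are what the K-SVD dictionaries are trained on; the rise width here is only the simplest blur proxy.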

    Sparse sampling of signal innovations

    Sparse sampling of continuous-time sparse signals is addressed. In particular, it is shown that sampling at the rate of innovation is possible, in some sense applying Occam's razor to the sampling of sparse signals. The noisy case is analyzed and solved, with methods proposed that reach the optimal performance given by the Cramer-Rao bounds. Finally, a number of applications where sparsity can be taken advantage of are discussed. The comprehensive coverage given in this article should lead to further research in sparse sampling, as well as to new applications. One main application of the theory presented in this article is ultra-wideband (UWB) communications.
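    The annihilating-filter method at the heart of FRI sampling can be sketched for a stream of K Diracs. A minimal noiseless sketch (the article's noisy case adds denoising, e.g. Cadzow-style iterations, on top of this):

```python
import numpy as np

# A stream of K Diracs on [0, 1) has Fourier-series coefficients
# X[m] = sum_k a_k * exp(-2j*pi*m*t_k); a filter of length K + 1 whose
# roots encode the locations annihilates this sequence.
K = 2
t_true = np.array([0.22, 0.71])            # unknown Dirac locations in [0, 1)
amp = np.array([1.0, 0.6])                 # unknown amplitudes
m = np.arange(2 * K + 1)                   # 2K + 1 consecutive coefficients suffice
X = (amp[None, :] * np.exp(-2j * np.pi * m[:, None] * t_true[None, :])).sum(axis=1)

# Annihilation system: sum_j h[j] * X[m - j] = 0 for the K + 1 filter taps h
A = np.array([[X[i + K - j] for j in range(K + 1)] for i in range(K)])
h = np.linalg.svd(A)[2][-1].conj()         # null vector = annihilating filter

# The filter's roots are u_k = exp(-2j*pi*t_k): the angles give the locations
u = np.roots(h)
t_hat = np.sort(np.mod(-np.angle(u) / (2 * np.pi), 1.0))
```

    Given the recovered locations, the amplitudes follow from a linear (Vandermonde) system, completing reconstruction at the rate of innovation.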

    Comparing synthetic refocusing to deconvolution for the extraction of neuronal calcium transients from light fields

    Significance: Light-field microscopy (LFM) enables fast, light-efficient, volumetric imaging of neuronal activity with calcium indicators. Calcium transients differ in temporal signal-to-noise ratio (tSNR) and spatial confinement when extracted from volumes reconstructed by different algorithms. Aim: We evaluated the capabilities and limitations of two light-field reconstruction algorithms for calcium fluorescence imaging. Approach: We acquired light-field image series from neurons either bulk-labeled or filled intracellularly with the red-emitting calcium dye CaSiR-1 in acute mouse brain slices. We compared the tSNR and spatial confinement of calcium signals extracted from volumes reconstructed with synthetic refocusing and Richardson-Lucy 3D deconvolution, with and without total variation regularization. Results: Both synthetic refocusing and Richardson-Lucy deconvolution resolved calcium signals from single cells and neuronal dendrites in three dimensions. Increasing the number of deconvolution iterations improved spatial confinement but reduced tSNR compared to synthetic refocusing. Volumetric light-field imaging did not decrease calcium signal tSNR compared to interleaved, widefield image series acquired in matched planes. Conclusions: LFM enables high-volume-rate, volumetric imaging of calcium transients in single cells (bulk-labeled) and in somata and dendrites (intracellularly loaded). The trade-offs identified for tSNR, spatial confinement, and computational cost indicate whether synthetic refocusing or deconvolution can better realize the scientific requirements of future LFM calcium imaging applications.
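    The Richardson-Lucy iteration itself is compact enough to sketch in 1D. This is a minimal version (without the total variation regularization mentioned above, and with an illustrative synthetic trace rather than real light-field data):

```python
import numpy as np

def richardson_lucy(d, psf, n_iter):
    """Plain 1D Richardson-Lucy deconvolution (multiplicative updates that
    preserve non-negativity; psf assumed normalized)."""
    psf_mirror = psf[::-1]
    u = np.full_like(d, d.mean())          # flat positive initial estimate
    for _ in range(n_iter):
        conv = np.convolve(u, psf, mode="same")
        ratio = d / np.maximum(conv, 1e-12)  # guard against division by zero
        u = u * np.convolve(ratio, psf_mirror, mode="same")
    return u

# Blur a sparse, positive "calcium transient" trace, then deconvolve it
x = np.zeros(100)
x[[30, 55, 56, 80]] = [1.0, 2.0, 1.5, 0.8]
psf = np.exp(-np.arange(-5, 6) ** 2 / 4.0)
psf /= psf.sum()
d = np.convolve(x, psf, mode="same")

restored = richardson_lucy(d, psf, n_iter=50)
```

    The iteration-number trade-off reported in the paper is visible even here: more iterations sharpen the estimate but, with noisy data, progressively amplify noise.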

    Moment inversion problem for piecewise D-finite functions

    We consider the problem of exact reconstruction of univariate functions with jump discontinuities at unknown positions from their moments. These functions are assumed to satisfy an a priori unknown linear homogeneous differential equation with polynomial coefficients on each continuity interval; therefore, they may be specified by a finite amount of information. This reconstruction problem has practical importance in signal processing and other applications. It is somewhat of a "folklore" result that the sequence of moments of such "piecewise D-finite" functions satisfies a linear recurrence relation of bounded order and degree. We derive this recurrence relation explicitly. It turns out that the coefficients of the differential operator which annihilates every piece of the function, as well as the locations of the discontinuities, appear in this recurrence in a precisely controlled manner. This leads to the formulation of a generic algorithm for reconstructing a piecewise D-finite function from its moments. We investigate the conditions for solvability of the resulting linear systems in the general case, and analyze a few particular examples. We provide results of numerical simulations for several types of signals, which test the sensitivity of the proposed algorithm to noise.
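    The simplest piecewise D-finite instance makes the moment-inversion idea concrete: a piecewise-constant function with a single jump (each piece satisfies f' = 0) is determined in closed form by its first two moments. A worked toy example:

```python
import numpy as np

# Simplest case: f(x) = c for x > a on [0, 1], else 0. Its moments are
#   m_k = integral of c * x**k over [a, 1] = c * (1 - a**(k+1)) / (k + 1),
# so m0 = c*(1 - a) and m1 = c*(1 - a**2)/2 give
#   a = 2*m1/m0 - 1   and   c = m0 / (1 - a).
c_true, a_true = 1.5, 0.4
m = np.array([c_true * (1 - a_true ** (k + 1)) / (k + 1) for k in range(2)])

a_hat = 2 * m[1] / m[0] - 1                # recovered jump location
c_hat = m[0] / (1 - a_hat)                 # recovered jump height
```

    The paper's recurrence-based algorithm generalizes exactly this inversion to higher-order operators and multiple unknown jump locations.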

    Shift-invariant-subspace discretization and volume reconstruction for light field microscopy

    Light Field Microscopy (LFM) is an imaging technique that captures 3D spatial information in a single 2D image. LFM is attractive because of its relatively simple implementation and fast volume acquisition rate. Capturing volume time series at camera frame rate can enable the study of the behaviour of many biological systems; for instance, it could provide insights into the communication dynamics of living 3D neural networks. However, conventional 3D reconstruction algorithms for LFM typically suffer from high computational cost, low lateral resolution, and reconstruction artifacts. In this work, we study the origin of these issues and propose novel techniques to improve the performance of the reconstruction process. First, we propose a discretization approach that uses shift-invariant subspaces to generalize the typical discretization framework used in LFM. Then, we study the shift-invariant-subspace assumption as a prior for volume reconstruction under ideal conditions. Furthermore, we present a method to reduce the computational time of the forward model by using the singular value decomposition (SVD). Finally, we propose iterative approaches that incorporate additional priors to perform artifact-free 3D reconstruction from real light field images. We show experimentally that our approach outperforms Richardson-Lucy-based strategies in computational time, image quality, and artifact reduction.
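    The SVD speed-up of a forward model can be sketched generically: a smooth forward operator is effectively low-rank, so a truncated factorization applies it in far fewer operations. A toy sketch with an illustrative kernel (not the actual LFM forward model):

```python
import numpy as np

# Replace a tall forward matrix H with a rank-r factorization so that each
# application costs O(r*(m + n)) instead of O(m*n). A smooth (blur-like)
# kernel matrix is effectively low-rank, so few modes are needed.
rng = np.random.default_rng(2)
s = np.linspace(0, 1, 200)                 # output coordinates
t = np.linspace(0, 1, 150)                 # input coordinates
H = np.exp(-((s[:, None] - t[None, :]) ** 2) / 0.02)   # smooth kernel matrix

U, sig, Vt = np.linalg.svd(H, full_matrices=False)
r = int(np.sum(sig > 1e-8 * sig[0]))       # keep only the dominant modes
Ur, sr, Vtr = U[:, :r], sig[:r], Vt[:r]

v = rng.standard_normal(150)
full = H @ v                               # direct application
fast = Ur @ (sr * (Vtr @ v))               # rank-r application
```

    The truncation threshold controls the speed/accuracy trade-off; for smooth operators the singular values decay fast, so aggressive truncation loses almost nothing.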

    Multimodal image super-resolution via joint sparse representations induced by coupled dictionaries

    Real-world data processing problems often involve multiple image modalities associated with a given scene, such as RGB images, infrared images, or multispectral images. The fact that different image modalities often share certain attributes, such as edges, textures, and other structural primitives, represents an opportunity to enhance various image processing tasks. This paper proposes a new approach to constructing a high-resolution (HR) version of a low-resolution (LR) image, given another HR image modality as guidance, based on joint sparse representations induced by coupled dictionaries. The proposed approach captures complex dependencies, including similarities and disparities, between different image modalities in a learned sparse-feature domain in lieu of the original image domain. It consists of two phases: a coupled dictionary learning phase and a coupled super-resolution phase. The learning phase learns a set of dictionaries from the training dataset that couple different image modalities together in the sparse-feature domain. In turn, the super-resolution phase leverages such dictionaries to construct an HR version of the LR target image, with another related image modality for guidance. In the advanced version of our approach, a multistage strategy and a neighbourhood regression concept are introduced to further improve the model capacity and performance. Extensive guided image super-resolution experiments on real multimodal images demonstrate that the proposed approach has distinctive advantages over state-of-the-art approaches, for example, overcoming the texture-copying artifacts that commonly result from inconsistency between the guidance and target images. Of particular relevance, the proposed model demonstrates much better robustness than competing deep models in a range of noisy scenarios.
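    The coupled-dictionary mechanism can be sketched in miniature: if an LR patch and its HR counterpart share one sparse code, the code found with the LR dictionary synthesizes the HR patch from the HR dictionary. A toy sketch with random (untrained) dictionaries and a basic orthogonal matching pursuit coder; all names are illustrative and the guidance modality is omitted:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse code of y in dictionary D."""
    idx, r = [], y.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))      # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef                          # update residual
    z = np.zeros(D.shape[1])
    z[idx] = coef
    return z

# D_lr and D_hr play the role of coupled dictionaries: an LR patch and its
# HR counterpart are assumed to share the same sparse code. At test time we
# code the LR patch in D_lr and synthesize the HR patch from D_hr.
rng = np.random.default_rng(3)
D_lr = rng.standard_normal((16, 40))
D_lr /= np.linalg.norm(D_lr, axis=0)
D_hr = rng.standard_normal((64, 40))
D_hr /= np.linalg.norm(D_hr, axis=0)

z_shared = np.zeros(40)
z_shared[[5, 21]] = [1.2, -0.7]            # shared sparse code
patch_lr = D_lr @ z_shared                 # observed LR patch
patch_hr_hat = D_hr @ omp(D_lr, patch_lr, k=2)
```

    With trained coupled dictionaries, recovery quality depends on dictionary coherence and code sparsity; random dictionaries here only illustrate the mechanics of sharing one code across two resolutions.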